Results 1 - 18 of 18
1.
IEEE Trans Pattern Anal Mach Intell ; 46(5): 3509-3521, 2024 May.
Article in English | MEDLINE | ID: mdl-38090835

ABSTRACT

There are two mainstream approaches for object detection: top-down and bottom-up. The state-of-the-art approaches are mainly top-down methods. In this paper, we demonstrate that bottom-up approaches show competitive performance compared with top-down approaches and have higher recall rates. Our approach, named CenterNet, detects each object as a triplet of keypoints (the top-left and bottom-right corners and the center keypoint). We first group the corners according to designed cues and then confirm the object locations based on the center keypoints. The corner keypoints allow the approach to detect objects of various scales and shapes, and the center keypoint reduces the confusion introduced by a large number of false-positive proposals. Our approach is an anchor-free detector because it does not need to define explicit anchor boxes. We adapt our approach to backbones with different structures, including 'hourglass'-like networks and 'pyramid'-like networks, which detect objects in single-resolution and multi-resolution feature maps, respectively. On the MS-COCO dataset, CenterNet achieves average precisions (APs) of 53.7% and 57.1% with the Res2Net-101 and Swin-Transformer backbones, respectively, outperforming all existing bottom-up detectors and achieving state-of-the-art performance. We also design a real-time CenterNet model, which achieves a good trade-off between accuracy and speed, with an AP of 43.6% at 30.5 frames per second (FPS).
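The center-keypoint check described above — keeping a corner pair only if a predicted center falls inside the box's central region — can be sketched as follows. The function names and the one-third central-region fraction are illustrative assumptions, not the paper's code:

```python
def central_region(tl, br, frac=1 / 3):
    # tl=(x1, y1), br=(x2, y2); a box of size frac*w x frac*h around the box center
    x1, y1 = tl
    x2, y2 = br
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    w, h = (x2 - x1) * frac, (y2 - y1) * frac
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def confirm_box(tl, br, centers, frac=1 / 3):
    # accept the corner-pair proposal only if some predicted center keypoint
    # lies inside the central region, filtering false-positive pairings
    rx1, ry1, rx2, ry2 = central_region(tl, br, frac)
    return any(rx1 <= x <= rx2 and ry1 <= y <= ry2 for x, y in centers)
```

For a 90×90 box anchored at the origin, a center prediction at (45, 45) confirms the box, while one at (5, 5) rejects it.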

3.
Nature ; 619(7970): 533-538, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37407823

ABSTRACT

Weather forecasting is important for science and society. At present, the most accurate forecast system is the numerical weather prediction (NWP) method, which represents atmospheric states as discretized grids and numerically solves partial differential equations that describe the transition between those states [1]. However, this procedure is computationally expensive. Recently, artificial-intelligence-based methods [2] have shown potential in accelerating weather forecasting by orders of magnitude, but the forecast accuracy is still significantly lower than that of NWP methods. Here we introduce an artificial-intelligence-based method for accurate, medium-range global weather forecasting. We show that three-dimensional deep networks equipped with Earth-specific priors are effective at dealing with complex patterns in weather data, and that a hierarchical temporal aggregation strategy reduces accumulation errors in medium-range forecasting. Trained on 39 years of global data, our program, Pangu-Weather, obtains stronger deterministic forecast results on reanalysis data in all tested variables when compared with the world's best NWP system, the operational integrated forecasting system of the European Centre for Medium-Range Weather Forecasts (ECMWF) [3]. Our method also works well with extreme weather forecasts and ensemble forecasts. When initialized with reanalysis data, the accuracy of tracking tropical cyclones is also higher than that of ECMWF-HRES.
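The hierarchical temporal aggregation strategy — reaching a long forecast horizon with as few model applications as possible, since each iterative step accumulates error — can be sketched as a greedy schedule. The lead-time set below is an illustrative assumption based on the description, not necessarily the paper's exact configuration:

```python
def aggregation_schedule(horizon_hours, lead_times=(24, 6, 3, 1)):
    # greedily apply the model with the largest lead time that still fits,
    # minimizing the number of iterative steps and thus error accumulation
    schedule = []
    remaining = horizon_hours
    for lt in sorted(lead_times, reverse=True):
        while remaining >= lt:
            schedule.append(lt)
            remaining -= lt
    return schedule
```

A 31-hour forecast then composes the 24-hour, 6-hour, and 1-hour models once each (3 steps) instead of iterating a 1-hour model 31 times.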

4.
IEEE Trans Pattern Anal Mach Intell ; 45(10): 11502-11520, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37310846

ABSTRACT

Style-based GANs achieve state-of-the-art results for generating high-quality images, but lack explicit and precise control over camera poses. Recently proposed NeRF-based GANs have made great progress towards 3D-aware image generation. However, these methods either rely on convolution operators, which are not rotationally invariant, or utilize complex yet suboptimal training procedures to integrate both NeRF and CNN sub-structures, yielding non-robust, low-quality images with a large computational burden. This article presents an upgraded version called CIPS-3D++, aiming at robust, high-resolution, and efficient 3D-aware GANs. On the one hand, our basic model CIPS-3D, encapsulated in a style-based architecture, features a shallow NeRF-based 3D shape encoder as well as a deep MLP-based 2D image decoder, achieving robust image generation/editing with rotation invariance. On the other hand, our proposed CIPS-3D++, inheriting the rotational invariance of CIPS-3D and combining it with geometric regularization and upsampling operations, encourages high-resolution, high-quality image generation/editing with great computational efficiency. Trained on raw single-view images, without any bells and whistles, CIPS-3D++ sets new records for 3D-aware image synthesis, with an impressive FID of 3.2 on FFHQ at the 1024×1024 resolution. Meanwhile, CIPS-3D++ runs efficiently and enjoys a low GPU memory footprint, so it can be trained end-to-end on high-resolution images directly, in contrast to previous alternate/progressive methods. Based on the infrastructure of CIPS-3D++, we propose a 3D-aware GAN inversion algorithm named FlipInversion, which can reconstruct a 3D object from a single-view image. We also provide a 3D-aware stylization method for real images based on CIPS-3D++ and FlipInversion. In addition, we analyze the mirror-symmetry problem encountered in training and solve it by introducing an auxiliary discriminator for the NeRF network. Overall, CIPS-3D++ provides a strong base model that can serve as a testbed for transferring GAN-based image editing methods from 2D to 3D.

5.
IEEE Trans Pattern Anal Mach Intell ; 45(10): 12699-12706, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37145941

ABSTRACT

Few-shot class-incremental learning (FSCIL) faces the challenges of memorizing old class distributions and estimating new class distributions given few training samples. In this study, we propose a learnable distribution calibration (LDC) approach to systematically solve these two challenges within a unified framework. LDC is built upon a parameterized calibration unit (PCU), which initializes biased distributions for all classes based on classifier vectors (memory-free) and a single covariance matrix. The covariance matrix is shared by all classes, so the memory costs are fixed. During base training, PCU is endowed with the ability to calibrate biased distributions by recurrently updating sampled features under the supervision of real distributions. During incremental learning, PCU recovers distributions for old classes to avoid 'forgetting', and estimates distributions and augments samples for new classes to alleviate the 'over-fitting' caused by the biased distributions of few-shot samples. LDC is theoretically grounded by formulating a variational inference procedure. It improves FSCIL's flexibility, as the training procedure requires no class-similarity prior. Experiments on the CUB200, CIFAR100, and mini-ImageNet datasets show that LDC outperforms the state-of-the-art methods by 4.64%, 1.98%, and 3.97%, respectively. LDC's effectiveness is also validated in few-shot learning scenarios.
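The memory-saving core of this idea — augmenting a few-shot class by sampling from a Gaussian whose mean comes from the class representation and whose covariance is shared across all classes — can be sketched in NumPy. Names and shapes are illustrative assumptions; the real PCU learns the calibration rather than sampling from a fixed Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_features(class_mean, shared_cov, n_samples=5):
    # draw synthetic features for a few-shot class; the covariance matrix is
    # shared by every class, so memory cost is fixed regardless of class count
    return rng.multivariate_normal(class_mean, shared_cov, size=n_samples)

d = 4
shared_cov = np.eye(d) * 0.1   # one covariance for all classes
mean = np.ones(d)              # stand-in for a classifier vector
samples = augment_features(mean, shared_cov, n_samples=8)
```

The augmented `samples` can then be fed to the classifier alongside the few real shots to alleviate over-fitting.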

6.
IEEE Trans Pattern Anal Mach Intell ; 45(10): 11856-11868, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37192026

ABSTRACT

Pre-training on large-scale datasets has recently played an increasingly significant role in computer vision and natural language processing. However, as there exist numerous application scenarios with distinctive demands, such as specific latency constraints and specialized data distributions, it is prohibitively expensive to take advantage of large-scale pre-training for per-task requirements. In this paper, we focus on two fundamental perception tasks (object detection and semantic segmentation) and present a complete and flexible system named GAIA-Universe (GAIA), which can automatically and efficiently produce customized solutions according to heterogeneous downstream needs through data union and super-net training. GAIA is capable of providing powerful pre-trained weights, searching for models that conform to downstream demands such as hardware constraints, computation constraints, and specified data domains, and identifying relevant data for practitioners who have very few data points for their tasks. With GAIA, we achieve promising results on COCO, Objects365, Open Images, BDD100K, and UODB, which is a collection of datasets including KITTI, VOC, WiderFace, DOTA, Clipart, Comic, and more. Taking COCO as an example, GAIA efficiently produces models covering a wide range of latencies from 16 ms to 53 ms and yields APs from 38.2 to 46.5 without bells and whistles. GAIA is released at https://github.com/GAIA-vision.

7.
IEEE Trans Pattern Anal Mach Intell ; 45(8): 9284-9305, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37027561

ABSTRACT

The rapid development of deep learning has driven great progress in image segmentation, one of the fundamental tasks of computer vision. However, current segmentation algorithms mostly rely on the availability of pixel-level annotations, which are often expensive, tedious, and laborious to obtain. To alleviate this burden, the past years have witnessed increasing attention to building label-efficient, deep-learning-based image segmentation algorithms. This paper offers a comprehensive review of label-efficient image segmentation methods. To this end, we first develop a taxonomy that organizes these methods according to the supervision provided by different types of weak labels (including no supervision, inexact supervision, incomplete supervision, and inaccurate supervision), supplemented by the types of segmentation problems (including semantic segmentation, instance segmentation, and panoptic segmentation). Next, we summarize the existing label-efficient image segmentation methods from a unified perspective that addresses an important question: how to bridge the gap between weak supervision and dense prediction. The current methods are mostly based on heuristic priors, such as cross-pixel similarity, cross-label constraints, cross-view consistency, and cross-image relations. Finally, we share our opinions about future research directions for label-efficient deep image segmentation.


Subjects
Algorithms; Semantics; Image Processing, Computer-Assisted
8.
IEEE Trans Pattern Anal Mach Intell ; 45(7): 9225-9232, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37018583

ABSTRACT

Batch normalization (BN) is a fundamental unit in modern deep neural networks. However, BN and its variants focus on normalization statistics but neglect the recovery step, which uses a linear transformation to improve the capacity for fitting complex data distributions. In this paper, we demonstrate that the recovery step can be improved by aggregating the neighborhood of each neuron rather than considering a single neuron only. Specifically, we propose a simple yet effective method named batch normalization with enhanced linear transformation (BNET) to embed spatial contextual information and improve representation ability. BNET can be easily implemented using depth-wise convolution and seamlessly transplanted into existing architectures with BN. To the best of our knowledge, BNET is the first attempt to enhance the recovery step of BN. Furthermore, BN can be interpreted as a special case of BNET from both spatial and spectral views. Experimental results demonstrate that BNET achieves consistent performance gains over various backbones in a wide range of visual tasks. Moreover, BNET can accelerate the convergence of network training and enhance spatial information by assigning large weights to important neurons.
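The enhancement described — replacing BN's per-channel scalar scale in the recovery step with a small depth-wise convolution over each neuron's spatial neighborhood — can be sketched in NumPy (1D spatial case for brevity; function and variable names are illustrative, not the paper's code):

```python
import numpy as np

def bn_recovery(x_norm, gamma, beta):
    # standard BN recovery: a per-channel scale and shift
    return gamma[:, None] * x_norm + beta[:, None]

def bnet_recovery(x_norm, w, beta):
    # BNET-style recovery sketch: each output is a depth-wise convolution of
    # the normalized activations, so spatial context enters the linear
    # transformation; w has shape (C, k) with k the kernel size
    C, L = x_norm.shape
    k = w.shape[1]
    pad = k // 2
    xp = np.pad(x_norm, ((0, 0), (pad, pad)))
    out = np.empty_like(x_norm)
    for c in range(C):
        for i in range(L):
            out[c, i] = xp[c, i:i + k] @ w[c] + beta[c]
    return out
```

Setting every kernel to zero except a center tap equal to `gamma` recovers plain BN exactly, which is the "BN is a special case of BNET" view.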

9.
IEEE Trans Pattern Anal Mach Intell ; 45(8): 10317-10330, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37030795

ABSTRACT

In order to enable the model to generalize to unseen "action-objects" (compositional actions), previous methods encode multiple pieces of information (i.e., the appearance, position, and identity of visual instances) independently and concatenate them for classification. However, these methods ignore the potential supervisory role of instance information (i.e., position and identity) in the process of visual perception. To this end, we present a novel framework, namely Progressive Instance-aware Feature Learning (PIFL), to progressively extract, reason about, and predict dynamic cues of moving instances from videos for compositional action recognition. Specifically, the framework extracts features from foreground instances that are likely to be relevant to human actions (Position-aware Appearance Feature Extraction in Section III-B1), performs identity-aware reasoning among instance-centric features with semantic-specific interactions (Identity-aware Feature Interaction in Section III-B2), and finally predicts the instances' positions from observed states to force the model to perceive their movement (Semantic-aware Position Prediction in Section III-B3). We evaluate our approach on two compositional action recognition benchmarks, namely Something-Else and IKEA-Assembly. Our approach achieves consistent accuracy gains over off-the-shelf action recognition algorithms in terms of both ground-truth and detected positions of instances.


Subjects
Algorithms; Visual Perception; Humans; Learning
10.
IEEE Trans Pattern Anal Mach Intell ; 45(8): 9454-9468, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37022836

ABSTRACT

With convolution operations, convolutional neural networks (CNNs) are good at extracting local features but have difficulty capturing global representations. With cascaded self-attention modules, vision transformers can capture long-distance feature dependencies but unfortunately deteriorate local feature details. In this paper, we propose a hybrid network structure, termed Conformer, to exploit the advantages of both convolution operations and self-attention mechanisms for enhanced representation learning. Conformer is rooted in the feature coupling of CNN local features and transformer global representations under different resolutions in an interactive fashion. Conformer adopts a dual structure so that local details and global dependencies are retained to the maximum extent. We also propose a Conformer-based detector (ConformerDet), which learns to predict and refine object proposals by performing region-level feature coupling in an augmented cross-attention fashion. Experiments on the ImageNet and MS COCO datasets validate Conformer's superiority for visual recognition and object detection, demonstrating its potential to be a general backbone network.


Subjects
Algorithms; Learning; Neural Networks, Computer
11.
IEEE Trans Pattern Anal Mach Intell ; 45(6): 6955-6968, 2023 Jun.
Article in English | MEDLINE | ID: mdl-33108281

ABSTRACT

Group activity recognition (GAR) is a challenging task aimed at recognizing the behavior of a group of people. It is a complex inference process in which visual cues collected from individuals are integrated into the final prediction, while being aware of the interactions between them. This paper goes one step beyond the existing approaches by designing a Hierarchical Graph-based Cross Inference Network (HiGCIN), in which three levels of information, i.e., the body-region level, person level, and group-activity level, are constructed, learned, and inferred in an end-to-end manner. Primarily, we present a generic Cross Inference Block (CIB), which is able to concurrently capture the latent spatiotemporal dependencies among body regions and persons. Based on the CIB, two modules are designed to extract and refine features for group activities at each level. Experiments on two popular benchmarks verify the effectiveness of our approach, particularly its ability to infer with multilevel visual cues. In addition, training our approach does not require individual action labels, which greatly reduces the amount of labor required for data annotation.

12.
IEEE Trans Pattern Anal Mach Intell ; 45(3): 3753-3767, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35604978

ABSTRACT

Self-supervised learning based on instance discrimination has shown remarkable progress. In particular, contrastive learning, which regards each image as well as its augmentations as an individual class and tries to distinguish them from all other images, has been verified to be effective for representation learning. However, conventional contrastive learning does not explicitly model the relation between semantically similar samples. In this paper, we propose a general module that considers the semantic similarity among images. This is achieved by expanding the views generated from a single image to cross-sample and multi-level views, and modeling the invariance to semantically similar images in a hierarchical way. Specifically, the cross-sample views are generated by a data mixing operation, which is constrained to samples that are semantically similar, while the multi-level views are expanded at the intermediate layers of a network. In this way, the contrastive loss is extended to allow for multiple positives per anchor, and semantically similar images are explicitly pulled together at different layers of the network. Our method, termed CSML, has the ability to integrate multi-level representations across samples in a robust way. CSML is applicable to current contrastive-based methods and consistently improves their performance. Notably, using MoCo v2 as an instantiation, CSML achieves 76.6% top-1 accuracy under linear evaluation with ResNet-50 as the backbone, and 66.7% and 75.1% top-1 accuracy with only 1% and 10% of the labels, respectively. All these numbers set a new state of the art. The code is available at https://github.com/haohang96/CSML.

13.
IEEE Trans Pattern Anal Mach Intell ; 43(9): 2953-2970, 2021 09.
Article in English | MEDLINE | ID: mdl-33591909

ABSTRACT

Differentiable architecture search (DARTS) enables effective neural architecture search (NAS) using gradient descent, but suffers from high memory and computational costs. In this paper, we propose a novel approach, namely Partially-Connected DARTS (PC-DARTS), to achieve efficient and stable neural architecture search by reducing the channel and spatial redundancies of the super-network. At the channel level, partial channel connection is presented, which randomly samples a small subset of channels for operation selection to accelerate the search process and suppress the over-fitting of the super-network. A side operation is introduced to bypass the (non-sampled) channels and guarantee the performance of searched architectures under extremely low sampling rates. At the spatial level, input features are down-sampled to eliminate spatial redundancy and enhance the efficiency of the mixed computation for operation selection. Furthermore, edge normalization is developed to maintain the consistency of edge selection based on channel sampling with the architectural parameters for edges. Theoretical analysis shows that partial channel connection and the parameterized side operation are equivalent to regularizing the super-network on the weights and architectural parameters during bilevel optimization. Experimental results demonstrate that the proposed approach achieves higher search speed and training stability than DARTS. PC-DARTS obtains a top-1 error rate of 2.55 percent on CIFAR-10 with 0.07 GPU-days for architecture search, and a state-of-the-art top-1 error rate of 24.1 percent on ImageNet (under the mobile setting) within 2.8 GPU-days.
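The partial channel connection can be sketched as: sample a 1/K fraction of channels for the (expensive) mixed operation and let the rest bypass it unchanged. This is an illustrative NumPy toy, assuming a generic `mixed_op`; in the real super-network the mixed operation is a weighted sum over candidate ops:

```python
import numpy as np

rng = np.random.default_rng(0)

def partial_channel_forward(x, mixed_op, k=4):
    # x: (C, H, W). Route C/k randomly sampled channels through the mixed
    # operation and bypass the rest, cutting its cost roughly by a factor of k
    C = x.shape[0]
    idx = rng.permutation(C)
    sel, keep = idx[:C // k], idx[C // k:]
    out = np.empty_like(x)
    out[sel] = mixed_op(x[sel])   # operation selection on the sampled subset
    out[keep] = x[keep]           # non-sampled channels bypass the op
    return out
```

Because only a quarter of the channels (for k=4) pass through the candidate operations, memory during search drops accordingly, which is what permits larger batch sizes and more stable search.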

14.
IEEE Trans Image Process ; 30: 2538-2548, 2021.
Article in English | MEDLINE | ID: mdl-33481714

ABSTRACT

Natural language moment localization aims at localizing video clips according to a natural language description. The key to this challenging task lies in modeling the relationship between verbal descriptions and visual contents. Existing approaches often sample a number of clips from the video and individually determine how each of them is related to the query sentence. However, this strategy can fail dramatically, in particular when the query sentence refers to some visual elements that appear outside of, or even distant from, the target clip. In this paper, we address this issue by designing an Interaction-Integrated Network (I2N), which contains a few Interaction-Integrated Cells (I2Cs). The idea lies in the observation that the query sentence not only provides a description of the video clip but also contains semantic cues about the structure of the entire video. Based on this, I2Cs go one step beyond modeling short-term contexts in the time domain by encoding long-term video content into every frame feature. By stacking a few I2Cs, the obtained network, I2N, enjoys an improved ability of inference, brought by both (I) multi-level correspondence between vision and language and (II) more accurate cross-modal alignment. When evaluated on a challenging video moment localization dataset named DiDeMo, I2N outperforms the state-of-the-art approach by a clear margin of 1.98%. On two other challenging datasets, Charades-STA and TACoS, I2N also reports competitive performance.

15.
IEEE Trans Med Imaging ; 39(2): 514-525, 2020 02.
Article in English | MEDLINE | ID: mdl-31352338

ABSTRACT

We aim at segmenting a wide variety of organs, including tiny targets (e.g., the adrenal gland) and neoplasms (e.g., pancreatic cysts), from abdominal CT scans. This is a challenging task in three aspects. First, some organs (e.g., the pancreas) are highly variable in both anatomy and geometry, and thus very difficult to depict. Second, the neoplasms often vary greatly in size, shape, and location within the organ. Third, the targets (organs and neoplasms) can be considerably small compared to the human body, so standard deep networks for segmentation are often less sensitive to these targets and thus predict less accurately, especially around their boundaries. In this paper, we present an end-to-end framework named recurrent saliency transformation network (RSTN) for segmenting tiny and/or variable targets. The RSTN is a coarse-to-fine approach that uses the prediction from the first (coarse) stage to shrink the input region for the second (fine) stage. A saliency transformation module is inserted between these two stages so that 1) the coarse-scaled segmentation mask can be transferred as spatial weights and applied to the fine stage, and 2) the gradients can be back-propagated from the loss layer to the entire network, so the two stages are optimized jointly. In the testing stage, we perform segmentation iteratively to improve accuracy. In this extended journal paper, we introduce a gradual optimization to improve the stability of the RSTN and a hierarchical version named H-RSTN to segment tiny and variable neoplasms such as pancreatic cysts. Experiments are performed on several CT datasets, including a public pancreas segmentation dataset, our own multi-organ dataset, and a cystic pancreas dataset. In all these cases, the RSTN outperforms the baseline (a stage-wise coarse-to-fine approach) significantly. As confirmed by the radiologists on our team, these promising segmentation results can help the early diagnosis of pancreatic cancer. The code and pre-trained models of our project are available at https://github.com/198808xc/OrganSegRSTN.
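The coarse-to-fine input shrinking can be sketched as cropping the fine-stage input to the margin-padded bounding box of the coarse-stage mask. This simplification omits the saliency transformation module entirely; names and the margin value are illustrative assumptions:

```python
import numpy as np

def crop_from_coarse_mask(image, coarse_mask, margin=2):
    # shrink the fine-stage input to the bounding box of the coarse-stage
    # prediction, enlarged by a small safety margin
    ys, xs = np.nonzero(coarse_mask)
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + margin + 1, image.shape[0])
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + margin + 1, image.shape[1])
    # return the crop plus its offset so the fine mask can be pasted back
    return image[y0:y1, x0:x1], (y0, x0)
```

At test time this can be iterated: the fine mask from one round defines the crop for the next, which is the "segmentation performed iteratively" step.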


Subjects
Abdomen/diagnostic imaging; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Databases, Factual; Humans; Image Interpretation, Computer-Assisted; Pancreatic Neoplasms/diagnostic imaging
17.
IEEE Trans Image Process ; 24(11): 4287-98, 2015 Nov.
Article in English | MEDLINE | ID: mdl-25974934

ABSTRACT

State-of-the-art web image search frameworks are often based on the bag-of-visual-words (BoVW) model and the inverted index structure. Despite their simplicity, efficiency, and scalability, they often suffer from low precision and/or recall, due to the limited stability of local features and the considerable information loss at the quantization stage. To refine the quality of retrieved images, various post-processing methods have been adopted after the initial search process. In this paper, we investigate the online querying process from a graph-based perspective. We introduce a heterogeneous graph model containing both image and feature nodes explicitly, and propose an efficient reranking approach consisting of two successive modules, i.e., incremental query expansion and image-feature voting, to improve recall and precision, respectively. Compared with conventional reranking algorithms, our method does not require the geometric information of visual words and therefore incurs low time and memory consumption. Moreover, our method is independent of the initial search process, and can cooperate with many BoVW-based image search pipelines or be adopted after other post-processing algorithms. We evaluate our approach on large-scale image search tasks and verify its competitive search performance.
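The image-feature voting module can be sketched over a plain inverted index: every feature in the expanded query set votes for the images it appears in, and images are reranked by vote count. Function and index names are illustrative assumptions, not the paper's implementation:

```python
def feature_vote(expanded_features, inverted_index):
    # each feature in the expanded query votes for the images containing it;
    # rerank images by descending vote count
    votes = {}
    for f in expanded_features:
        for img in inverted_index.get(f, ()):
            votes[img] = votes.get(img, 0) + 1
    return sorted(votes, key=lambda img: -votes[img])
```

Because voting only touches the inverted index, no geometric verification of visual words is needed, which is where the low time and memory cost comes from.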

18.
IEEE Trans Image Process ; 23(5): 1994-2008, 2014 May.
Article in English | MEDLINE | ID: mdl-24710400

ABSTRACT

In image classification tasks, one of the most successful algorithms is the bag-of-features (BoF) model. Although the BoF model has many advantages, such as simplicity, generality, and scalability, it still suffers from several drawbacks, including the limited semantic description of local descriptors, the lack of robust structures upon single visual words, and the absence of efficient spatial weighting. To overcome these shortcomings, various techniques have been proposed, such as extracting multiple descriptors, spatial context modeling, and interest region detection. Though they have been proven to improve the BoF model to some extent, a coherent scheme to integrate these individual modules is still lacking. To address the problems above, we propose a novel framework with spatial pooling of complementary features. Our model expands the traditional BoF model in three aspects. First, we propose a new scheme for combining texture and edge-based local features at the descriptor extraction level. Next, we build geometric visual phrases to model the spatial context upon complementary features for mid-level image representation. Finally, based on a smoothed edge map, a simple and effective spatial weighting scheme is performed to capture image saliency. We test the proposed framework on several benchmark datasets for image classification. The extensive results show the superior performance of our algorithm over state-of-the-art methods.
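The edge-map-based spatial weighting can be sketched as a weighted pooling of descriptor codes, with each descriptor's weight read from the smoothed edge map at its location. This is a hedged NumPy illustration under assumed names and shapes, not the paper's pipeline:

```python
import numpy as np

def weighted_pool(codes, positions, edgemap):
    # pool visual-word codes with weights taken from a smoothed edge map at
    # each descriptor's (y, x) location, so salient regions contribute more
    weights = np.array([edgemap[y, x] for y, x in positions])
    return (codes * weights[:, None]).sum(axis=0) / (weights.sum() + 1e-8)
```

Descriptors falling on strong edges thus dominate the pooled representation, approximating the saliency-driven weighting the abstract describes.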
